Connection Strings
From Scratch
Every connection string format across databases, cloud services, message brokers, caches, and protocols, with anatomy breakdowns and copy-ready examples.
Relational SQL Databases
Traditional structured databases with JDBC, ODBC, and driver-specific formats
```
# Standard URL format (PostgreSQL + libpq)
postgresql://username:password@localhost:5432/mydb

# With options
postgresql://user:pass@host:5432/db?sslmode=require&connect_timeout=10

# postgres:// also works (alias)
postgres://user:pass@host/db
```
```
# libpq key=value format (whitespace-separated)
host=localhost port=5432 dbname=mydb user=postgres password=secret sslmode=require connect_timeout=10
```
```python
import psycopg2

conn = psycopg2.connect("postgresql://user:pass@localhost/db")

# SQLAlchemy engine
from sqlalchemy import create_engine
engine = create_engine("postgresql+psycopg2://user:pass@host:5432/db")
```
```
# sslmode options: disable, allow, prefer, require, verify-ca, verify-full
# (wrapped for readability; write as one line)
postgresql://user:pass@prod-server:5432/db
    ?sslmode=verify-full
    &sslcert=/path/to/client.crt
    &sslkey=/path/to/client.key
    &sslrootcert=/path/to/ca.crt
```
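Not libpq functionality, just a handy way to inspect a URL's anatomy: Python's standard `urllib.parse` splits any URL-style connection string into its components (the host and database names below are made up).

```python
from urllib.parse import urlsplit, parse_qs

url = "postgresql://app_user:s3cret@db.internal:5432/orders?sslmode=require&connect_timeout=10"

parts = urlsplit(url)
options = parse_qs(parts.query)

print(parts.scheme)                    # postgresql
print(parts.username, parts.password)  # app_user s3cret
print(parts.hostname, parts.port)      # db.internal 5432
print(parts.path.lstrip("/"))          # orders
print(options["sslmode"][0])           # require
```

The same five pieces (scheme, credentials, host, port, path, query) recur in nearly every URI-style format in this document.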
```
# Standard
mysql://user:password@localhost:3306/mydb

# With SSL and charset
mysql://user:pass@host:3306/db?ssl=true&charset=utf8mb4

# MariaDB (same format)
mariadb://user:pass@host:3306/db
```
```
Driver={MySQL ODBC 8.0 Driver};
Server=localhost;
Port=3306;
Database=mydb;
User=root;
Password=secret;
Option=3;
```
```python
# mysql-connector-python
import mysql.connector

conn = mysql.connector.connect(
    host="localhost",
    port=3306,
    user="root",
    password="secret",
    database="mydb",
)

# SQLAlchemy URL variants
"mysql+mysqlconnector://user:pass@host/db"
"mysql+pymysql://user:pass@host/db"
```
```
// Standard SQL auth
Server=myserver.database.windows.net;
Database=mydb;
User Id=myuser;
Password=mypassword;
Encrypt=True;

// Windows Integrated Auth (no password needed)
Server=localhost\SQLEXPRESS;
Database=mydb;
Trusted_Connection=True;
TrustServerCertificate=True;
```
```
Driver={ODBC Driver 18 for SQL Server};
Server=tcp:myserver,1433;
Database=mydb;
Uid=myuser;
Pwd={my_password};
Encrypt=yes;
TrustServerCertificate=no;
```
```python
# Python SQLAlchemy
"mssql+pyodbc://user:pass@server/db?driver=ODBC+Driver+18+for+SQL+Server"
```

```javascript
// Node.js (mssql package)
const config = {
  server: 'myserver',
  database: 'mydb',
  user: 'user',
  password: 'pass',
  port: 1433,
  options: { encrypt: true, trustServerCertificate: false }
};
```
```
# Relative path
sqlite:///relative/path/to/db.sqlite

# Absolute path (note 4 slashes on Unix)
sqlite:////home/user/mydb.sqlite

# In-memory (lost when connection closes)
sqlite:///:memory:
```

```python
# Python built-in
import sqlite3

conn = sqlite3.connect("mydb.sqlite")
conn = sqlite3.connect(":memory:")
```
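A quick round trip against the in-memory form shows the whole lifecycle in one shot; the table and column names are illustrative:

```python
import sqlite3

# :memory: databases exist only for the lifetime of this connection
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
row = conn.execute("SELECT name FROM users").fetchone()
print(row[0])  # alice
conn.close()   # the database is gone after this
```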
```
// JDBC URL (SID form)
jdbc:oracle:thin:@hostname:1521:ORCL

// With service name (preferred in modern Oracle)
jdbc:oracle:thin:@//hostname:1521/service_name
```

```python
# Python (cx_Oracle / python-oracledb)
import cx_Oracle

dsn = "hostname:1521/service_name"
conn = cx_Oracle.connect(user="scott", password="tiger", dsn=dsn)

# SQLAlchemy
"oracle+cx_oracle://user:pass@host:1521/?service_name=orcl"
```
```
# Supabase direct connection
postgresql://postgres:[password]@db.xxxx.supabase.co:5432/postgres

# Supabase pooled (transaction mode via PgBouncer)
postgresql://postgres:[password]@aws-0-us-east-1.pooler.supabase.com:6543/postgres

# Neon (serverless Postgres)
postgresql://user:pass@ep-xxx.us-east-2.aws.neon.tech/dbname?sslmode=require

# PlanetScale (MySQL-compatible)
mysql://user:pass@xxx.connect.psdb.cloud/mydb?ssl={"rejectUnauthorized":true}
```
NoSQL / Document Databases
MongoDB, Firestore, DynamoDB and other schema-flexible stores
```
# Basic
mongodb://localhost:27017/mydb

# With auth
mongodb://user:password@localhost:27017/mydb?authSource=admin

# Replica set
mongodb://user:pass@host1:27017,host2:27017/db?replicaSet=myReplSet&readPreference=secondaryPreferred
```
```
# Atlas uses +srv for DNS-based discovery (no port needed)
mongodb+srv://user:password@cluster0.xxxxx.mongodb.net/mydb?retryWrites=true&w=majority&appName=MyApp
```
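Credentials containing reserved characters (`:`, `/`, `@`, `%`) must be percent-encoded before being embedded in a `mongodb://` URI, or the driver will misparse the string. A small sketch with a made-up password:

```python
from urllib.parse import quote_plus

# Percent-encode user and password before building the URI
user = quote_plus("app-user")
password = quote_plus("p@ss:w/rd")  # placeholder credential

uri = f"mongodb+srv://{user}:{password}@cluster0.example.mongodb.net/mydb"
print(uri)
# mongodb+srv://app-user:p%40ss%3Aw%2Frd@cluster0.example.mongodb.net/mydb
```

The same rule applies to any URI-style string in this document (PostgreSQL, Redis, AMQP, ...).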
```python
# Python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:pass@cluster/db")
db = client["mydb"]
```

```javascript
// Mongoose (Node.js); useNewUrlParser and useUnifiedTopology
// are no-ops since Mongoose 6 and can be omitted
await mongoose.connect("mongodb+srv://user:pass@cluster/db");
```
```python
# NoSQL API (native SDK)
from azure.cosmos import CosmosClient

client = CosmosClient(
    url="https://myaccount.documents.azure.com:443/",
    credential="PRIMARY_KEY_HERE",
)
```

```
# MongoDB API (use mongo URI with SSL)
mongodb://account:PRIMARY_KEY@account.mongo.cosmos.azure.com:10255/mydb?ssl=true&retrywrites=false

# Connection string from portal
AccountEndpoint=https://account.documents.azure.com:443/;AccountKey=BASE64KEY==;
```
```javascript
// Client SDK (browser/React Native)
import { initializeApp } from "firebase/app";

const firebaseConfig = {
  apiKey: "AIzaSy...",
  authDomain: "myapp.firebaseapp.com",
  projectId: "myapp-12345",
  storageBucket: "myapp.appspot.com",
  messagingSenderId: "123456789",
  appId: "1:123:web:abc..."
};
const app = initializeApp(firebaseConfig);
```

```python
# Python Admin SDK (server-side)
import firebase_admin

cred = firebase_admin.credentials.Certificate("serviceAccountKey.json")
firebase_admin.initialize_app(cred)
```
```python
# Python (boto3)
import boto3

dynamodb = boto3.resource(
    "dynamodb",
    region_name="us-east-1",
    aws_access_key_id="AKIAIOSFODNN7",
    aws_secret_access_key="wJalrXUtnFEMI",
)

# Or use environment variables (preferred):
# AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION

# Local DynamoDB for dev
dynamodb = boto3.resource("dynamodb", endpoint_url="http://localhost:8000")
```
Vector & Graph Databases
AI-native stores: semantic search, embeddings, and knowledge graphs
```python
import chromadb

# In-memory (ephemeral, for testing)
client = chromadb.EphemeralClient()

# Persistent local (file-backed)
client = chromadb.PersistentClient(path="/path/to/chroma_data")

# Remote HTTP client (ChromaDB server running)
client = chromadb.HttpClient(host="localhost", port=8000)

# Chroma Cloud
client = chromadb.CloudClient(
    tenant="my-tenant",
    database="my-db",
    api_key="chroma-api-key",
)
```
```python
from pinecone import Pinecone

# Initialize with API key
pc = Pinecone(api_key="your-api-key")

# Connect to index by name
index = pc.Index("my-index")

# Or specify host directly
index = pc.Index(name="my-index", host="https://my-index-xxx.svc.pinecone.io")
```
```python
import weaviate
from weaviate.auth import AuthApiKey

# Local instance
client = weaviate.connect_to_local(host="localhost", port=8080, grpc_port=50051)

# Weaviate Cloud (newer clients rename this to connect_to_weaviate_cloud)
client = weaviate.connect_to_wcs(
    cluster_url="https://cluster.weaviate.network",
    auth_credentials=AuthApiKey("wcs-api-key"),
)
```
```python
from neo4j import GraphDatabase

# Bolt protocol (direct connection, binary, fast)
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Neo4j Aura (cloud, TLS)
driver = GraphDatabase.driver(
    "neo4j+s://xxxxx.databases.neo4j.io",
    auth=("neo4j", "password"),
)

# neo4j://      routing (cluster-aware), no encryption
# neo4j+s://    routing with TLS
# neo4j+ssc://  routing with TLS, no certificate validation
```
Azure Cloud Services
Storage, Service Bus, Event Hubs, AI Services and more
```
# Full connection string (from Azure Portal)
DefaultEndpointsProtocol=https;AccountName=mystorageaccount;AccountKey=BASE64_KEY==;EndpointSuffix=core.windows.net

# Blob-specific URL (with SAS token)
https://account.blob.core.windows.net/container?sv=2021-06-08&ss=b&srt=sco&sp=rwdlacuptfx&se=2024-12-31T00%3A00%3A00Z&sig=SIGNATURE
```

```python
# Python SDK (preferred: Managed Identity)
from azure.storage.blob import BlobServiceClient
from azure.identity import DefaultAzureCredential

client = BlobServiceClient(
    account_url="https://account.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)
```
```
# Shared access key (SAS policy)
Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=BASE64_KEY=
```

```python
# Python SDK
from azure.servicebus import ServiceBusClient

client = ServiceBusClient.from_connection_string(conn_str="Endpoint=sb://...")
sender = client.get_queue_sender(queue_name="my-queue")
```
```
# Namespace-level (can send to any hub)
Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=BASE64_KEY=

# Entity-level (specific event hub)
Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=send;SharedAccessKey=KEY=;EntityPath=my-event-hub
```
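The `Key=Value;` format used by Storage, Service Bus, and Event Hubs is easy to split by hand when you need a single field. A minimal parser (the helper name is my own); note that base64 keys can themselves end in `=`, so each pair must be split only on the first `=`:

```python
def parse_azure_conn_str(conn_str: str) -> dict:
    """Split an Azure-style 'Key=Value;Key=Value' connection string."""
    pairs = (p for p in conn_str.strip().split(";") if p)
    # split on the first '=' only: values may contain '=' padding
    return dict(p.split("=", 1) for p in pairs)

cs = ("Endpoint=sb://mynamespace.servicebus.windows.net/;"
      "SharedAccessKeyName=RootManageSharedAccessKey;"
      "SharedAccessKey=BASE64_KEY=")
parsed = parse_azure_conn_str(cs)
print(parsed["Endpoint"])         # sb://mynamespace.servicebus.windows.net/
print(parsed["SharedAccessKey"])  # BASE64_KEY=
```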
```
# ADO.NET
Server=tcp:myserver.database.windows.net,1433;
Initial Catalog=mydb;
Persist Security Info=False;
User ID=myuser;
Password=mypassword;
MultipleActiveResultSets=False;
Encrypt=True;
TrustServerCertificate=False;
Connection Timeout=30;
```
```python
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="AZURE_OPENAI_KEY",
    api_version="2024-02-01",
    azure_endpoint="https://myresource.openai.azure.com/",
)

# ENV VARS: AZURE_OPENAI_API_KEY, AZURE_OPENAI_ENDPOINT, OPENAI_API_VERSION
```
```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

client = SearchClient(
    endpoint="https://myservice.search.windows.net",
    index_name="my-index",
    credential=AzureKeyCredential("ADMIN_KEY"),
)
```
AWS Services
S3, RDS, SQS and AWS connection patterns
```
# S3 URI format (used in many tools)
s3://my-bucket/path/to/object.csv
```

```python
# boto3 (uses AWS credentials from env/config)
import boto3

s3 = boto3.client(
    "s3",
    region_name="us-east-1",
    aws_access_key_id="AKIAIOSFODNN7",
    aws_secret_access_key="wJalrXUtnFEMI",
)
# Best practice: use ~/.aws/credentials or an IAM role instead of
# hardcoding keys
```
```
# SQS Queue URL format
https://sqs.us-east-1.amazonaws.com/123456789012/MyQueueName
```

```python
# boto3 usage
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/...",
    MessageBody="Hello",
)
```
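Queue URLs follow a fixed pattern (regional endpoint, account ID, queue name), so they can be assembled without an API call; a tiny helper, with the function name my own:

```python
def sqs_queue_url(region: str, account_id: str, queue_name: str) -> str:
    # Regional SQS endpoint + account + queue name
    return f"https://sqs.{region}.amazonaws.com/{account_id}/{queue_name}"

print(sqs_queue_url("us-east-1", "123456789012", "MyQueueName"))
# https://sqs.us-east-1.amazonaws.com/123456789012/MyQueueName
```

In practice `sqs.get_queue_url(QueueName=...)` is the authoritative lookup; this sketch just shows the anatomy.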
```
# RDS uses standard DB connection strings, just with the RDS hostname

# PostgreSQL on RDS
postgresql://admin:pass@mydb.cluster-xxx.us-east-1.rds.amazonaws.com:5432/mydb

# MySQL on RDS
mysql://admin:pass@mydb.xxx.us-east-1.rds.amazonaws.com:3306/mydb
```
Message Brokers
RabbitMQ, Kafka, NATS: async communication between services
```
# AMQP (plain)
amqp://user:password@localhost:5672/vhost

# AMQPS (TLS)
amqps://user:password@broker.host.com:5671/vhost
```

```python
# Python (pika); %2F is the URL-encoded default vhost "/"
import pika

params = pika.URLParameters("amqp://guest:guest@localhost:5672/%2F")
conn = pika.BlockingConnection(params)
```
```
# Kafka uses a bootstrap servers list (no URI scheme)
bootstrap.servers=broker1:9092,broker2:9092
```

```python
# Python (confluent-kafka)
from confluent_kafka import Producer

p = Producer({"bootstrap.servers": "localhost:9092"})

# With SASL/SSL (cloud Kafka)
conf = {
    "bootstrap.servers": "broker.cloud:9092",
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": "PLAIN",
    "sasl.username": "API_KEY",
    "sasl.password": "API_SECRET",
}
```
```
# Basic NATS
nats://localhost:4222

# With credentials
nats://user:token@nats.example.com:4222

# TLS
tls://nats.example.com:4222
```

```python
# Python
import nats

nc = await nats.connect("nats://localhost:4222")
```
Cache & Key-Value Stores
Redis, Memcached, and in-memory data structures
```
# Basic (no auth)
redis://localhost:6379

# With password
redis://:mypassword@localhost:6379

# With username (Redis 6+ ACLs)
redis://user:password@localhost:6379

# With DB index (0-15)
redis://localhost:6379/0

# TLS (rediss://)
rediss://user:pass@redis.cloud.com:6380
```
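The DB index rides in the URL path; a short sketch pulling it out with the standard library (the helper name is my own):

```python
from urllib.parse import urlsplit

def redis_db_index(url: str) -> int:
    """Extract the database index from a redis:// URL (defaults to 0)."""
    path = urlsplit(url).path.lstrip("/")
    return int(path) if path else 0

print(redis_db_index("redis://localhost:6379"))    # 0
print(redis_db_index("redis://localhost:6379/3"))  # 3
```

`redis.Redis.from_url` does this (and credential parsing) for you; the sketch just makes the anatomy explicit.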
```python
import redis

# From URL
r = redis.Redis.from_url("redis://localhost:6379/0")

# Parameters
r = redis.Redis(
    host="localhost",
    port=6379,
    db=0,
    password="secret",
    decode_responses=True,
)

# Async (redis.asyncio, the successor to aioredis)
import redis.asyncio as aioredis
r = await aioredis.from_url("redis://localhost")
```
```python
# Redis Cluster
from redis.cluster import RedisCluster, ClusterNode

rc = RedisCluster(
    startup_nodes=[ClusterNode("node1", 6379), ClusterNode("node2", 6379)],
    decode_responses=True,
)
```

```
# Upstash (serverless Redis, TLS)
rediss://default:TOKEN@xxx.upstash.io:6379
```
```python
# Memcached has no URI standard; clients take host:port tuples

# Python (pymemcache)
from pymemcache.client.base import Client
client = Client(("localhost", 11211))

# Multiple servers (key-based sharding)
from pymemcache.client.hash import HashClient
client = HashClient([
    ("server1", 11211),
    ("server2", 11211),
])
```
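`HashClient` exists because the client, not the server, decides which node owns a key. A toy illustration of key-based server selection (real clients use consistent hashing so that adding a node remaps fewer keys):

```python
import hashlib

servers = [("server1", 11211), ("server2", 11211), ("server3", 11211)]

def pick_server(key: str):
    # Hash the key, take it modulo the server count: every client
    # that shares this list routes the same key to the same node
    digest = hashlib.md5(key.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

print(pick_server("user:42"))
```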
Network Protocols & Transfer
FTP, SSH/SFTP, SMTP, LDAP: lower-level connection strings
```
# FTP
ftp://user:password@ftp.example.com:21/path/to/dir

# FTPS (FTP over TLS)
ftps://user:pass@ftp.example.com:990

# SFTP (SSH File Transfer; a different protocol from FTPS)
sftp://user:pass@host.com:22/remote/path
```

```python
# Python (paramiko SFTP)
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # dev only
ssh.connect("host", port=22, username="user", password="pass")
sftp = ssh.open_sftp()
```
```
# SMTP URIs
smtp://smtp.gmail.com:587    # STARTTLS
smtps://smtp.gmail.com:465   # implicit SSL/TLS
```

```python
# Python smtplib
import smtplib

smtp = smtplib.SMTP("smtp.gmail.com", 587)
smtp.starttls()
smtp.login("user@gmail.com", "app_password")

# IMAP (read email)
import imaplib

imap = imaplib.IMAP4_SSL("imap.gmail.com", 993)
imap.login("user@gmail.com", "app_password")
```
```
# LDAP URI
ldap://ldap.example.com:389

# LDAPS (TLS)
ldaps://ldap.example.com:636
```

```python
# Python (ldap3)
from ldap3 import Server, Connection

server = Server("ldaps://ad.company.com:636")
conn = Connection(
    server,
    user="CN=svc,OU=Users,DC=company,DC=com",
    password="secret",
    auto_bind=True,
)
```
```
# ws:// (plain), wss:// (TLS; always use wss:// in production)
ws://localhost:8765
wss://api.example.com/ws
wss://api.example.com/ws?token=JWT_TOKEN
```

```python
# Python (websockets)
import websockets

async with websockets.connect("wss://echo.websocket.org") as ws:
    await ws.send("Hello!")
    msg = await ws.recv()
```

```javascript
// JavaScript
const ws = new WebSocket("wss://api.example.com/ws");
```
API & Auth Connection Patterns
REST APIs, OAuth, JWT: connecting to services via credentials
```
# 1. Header (most secure; recommended)
Authorization: Bearer YOUR_API_KEY
X-API-Key: YOUR_API_KEY

# 2. Query param (avoid in production; URLs end up in logs)
https://api.service.com/data?api_key=KEY
```

```python
# Python (requests)
import os
import requests

API_KEY = os.environ["API_KEY"]
headers = {"Authorization": f"Bearer {API_KEY}"}
resp = requests.get("https://api.example.com/", headers=headers)

# OpenAI SDK pattern
from openai import OpenAI
client = OpenAI(api_key="sk-...")
```
```
# OAuth2 token endpoint (Client Credentials flow)
POST https://auth.example.com/oauth/token
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials
&client_id=my-client-id
&client_secret=my-client-secret
&scope=read:data write:data

# Use the returned token
Authorization: Bearer eyJhbGciOiJSUzI1NiJ9...
```
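The request body above is ordinary form encoding; `urlencode` shows exactly what goes over the wire (`requests` performs the same encoding automatically when you pass a dict as `data=`):

```python
from urllib.parse import urlencode

# Client credentials here are placeholders
body = urlencode({
    "grant_type": "client_credentials",
    "client_id": "my-client-id",
    "client_secret": "my-client-secret",
    "scope": "read:data write:data",
})
print(body)
# grant_type=client_credentials&client_id=my-client-id&client_secret=my-client-secret&scope=read%3Adata+write%3Adata
```

Note how the `:` and the space in the scope value are escaped; the server decodes them back before checking the grant.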
```python
from sqlalchemy import create_engine

engine = create_engine(
    "postgresql://user:pass@localhost/db",
    pool_size=10,        # persistent connections
    max_overflow=20,     # extra connections under load
    pool_timeout=30,     # wait time before erroring
    pool_recycle=1800,   # recycle connections (seconds)
    pool_pre_ping=True,  # test connection before use
)
```
Files, Search & Misc
Elasticsearch, HDFS, connection string security best practices
```python
from elasticsearch import Elasticsearch

# Basic local
es = Elasticsearch("http://localhost:9200")

# Cloud (Elastic Cloud)
es = Elasticsearch(
    cloud_id="my-cluster:xxx",
    api_key=("id", "api_key_value"),
)

# With auth
es = Elasticsearch(
    "https://myhost:9200",
    basic_auth=("elastic", "password"),
    ca_certs="/path/to/cert.crt",
)
```
```python
# BAD: never hardcode credentials in source code
DB_URL = "postgresql://admin:supersecret@prod/db"

# GOOD: environment variables
import os
DB_URL = os.getenv("DATABASE_URL")

# GOOD: .env file (dev only; add it to .gitignore!)
from dotenv import load_dotenv  # python-dotenv
load_dotenv()

# GOOD: Azure Key Vault (production)
from azure.keyvault.secrets import SecretClient
from azure.identity import DefaultAzureCredential

client = SecretClient(
    vault_url="https://kv.vault.azure.net/",
    credential=DefaultAzureCredential(),
)
DB_URL = client.get_secret("db-connection-string").value

# GOOD: AWS Secrets Manager
import boto3
secrets = boto3.client("secretsmanager")
secret = secrets.get_secret_value(SecretId="my/db/conn")
```
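One pattern worth layering on top of `os.getenv`: fail fast at startup when a required variable is missing, instead of passing `None` into a driver and getting a confusing error later. A small sketch (the helper name is my own):

```python
import os

def require_env(name: str) -> str:
    """Fetch a required setting, failing fast with a clear error."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# DB_URL = require_env("DATABASE_URL")
```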
| Service | Scheme | Default Port | TLS Port / Scheme | Auth Pattern |
|---|---|---|---|---|
| PostgreSQL | postgresql:// | 5432 | sslmode=require | user:password |
| MySQL | mysql:// | 3306 | ssl=true param | user:password |
| SQL Server | Server=host | 1433 | Encrypt=True | User Id / Trusted_Connection |
| Oracle | jdbc:oracle:thin:@ | 1521 | tcps:// scheme | user/password |
| MongoDB | mongodb:// / mongodb+srv:// | 27017 | mongodb+srv (Atlas) | user:password@host |
| Redis | redis:// | 6379 | rediss:// port 6380 | :password@ or user:pass@ |
| RabbitMQ | amqp:// | 5672 | amqps:// port 5671 | user:password@host/vhost |
| Elasticsearch | http:// | 9200 | https:// + CA cert | basic_auth / api_key |
| Neo4j | bolt:// / neo4j:// | 7687 | neo4j+s:// | user:password |
| SMTP | smtp:// | 587 (STARTTLS) | smtps:// port 465 | login(user, pass) |
| LDAP | ldap:// | 389 | ldaps:// port 636 | DN string + password |
| FTP | ftp:// | 21 | ftps:// port 990 | user:password@host |
| SFTP/SSH | sftp:// | 22 | Always encrypted | user:pass or key file |
| WebSocket | ws:// | 80 | wss:// port 443 | token in URL/header |
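The table above can double as data: a hypothetical helper that fills in the default port when a URL-style connection string omits it (the dict mirrors the rows above; it is not part of any library):

```python
from urllib.parse import urlsplit

# Default ports for URI-style schemes, taken from the table above
DEFAULT_PORTS = {
    "postgresql": 5432, "mysql": 3306, "mongodb": 27017,
    "redis": 6379, "rediss": 6380, "amqp": 5672, "amqps": 5671,
    "ldap": 389, "ldaps": 636, "ftp": 21, "sftp": 22,
    "ws": 80, "wss": 443,
}

def effective_port(url: str):
    """Return the explicit port, or the scheme's default if omitted."""
    parts = urlsplit(url)
    return parts.port or DEFAULT_PORTS.get(parts.scheme)

print(effective_port("redis://cache.internal"))        # 6379
print(effective_port("postgresql://u:p@db:5433/app"))  # 5433
```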